Generative AI has matured to a point where large-scale models can generate text that seems indistinguishable from human-written text and remarkably photorealistic images. Automatically measuring how close the distribution of generated data is to the target real data distribution is a key step in diagnosing existing models and developing better models. We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images. These scores are statistical summaries of divergence frontiers capturing two types of errors in generative modeling. We explore four approaches to statistically estimate these scores: vector quantization, non-parametric estimation, classifier-based estimation, and parametric Gaussian approximations. We provide statistical bounds for the vector quantization approach. Empirically, we find that the proposed scores paired with a range of $f$-divergences and statistical estimation methods can quantify the gaps between the distributions of human-written text and those of modern neural language models by correlating with human judgments and identifying known properties of the generated texts. We conclude the paper by demonstrating its applications to other AI domains and discussing practical recommendations.
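To make the vector-quantization approach concrete, the following is a minimal sketch under stated assumptions, not the authors' released implementation: both samples are embedded with some feature extractor (left as a caller-supplied array here), jointly quantized with k-means, and the resulting histograms are swept over mixture distributions to trace a divergence frontier, whose area gives a MAUVE-style summary. The number of bins, the scaling constant, and the embedding choice are all placeholders.

```python
# Minimal sketch of a quantized divergence-frontier score; illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over the same bins."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def quantized_frontier_score(feats_model, feats_human, n_bins=50, scale=5.0, n_lambdas=25):
    """Quantize both embedded samples jointly and summarize their divergence frontier.

    feats_model, feats_human: (n, d) arrays of embedded generated / human samples.
    Returns an area-under-the-frontier summary in (0, 1]; higher means closer.
    """
    # 1. Joint vector quantization of the two embedded samples.
    km = KMeans(n_clusters=n_bins, n_init=5, random_state=0)
    km.fit(np.vstack([feats_model, feats_human]))
    hist = lambda f: np.bincount(km.predict(f), minlength=n_bins) / len(f)
    p, q = hist(feats_model), hist(feats_human)

    # 2. Trace the frontier with mixtures r = lam * p + (1 - lam) * q, lam in (0, 1).
    xs, ys = [], []
    for lam in np.linspace(0, 1, n_lambdas + 2)[1:-1]:
        r = lam * p + (1 - lam) * q
        xs.append(np.exp(-scale * kl_divergence(q, r)))  # one error type (recall-like)
        ys.append(np.exp(-scale * kl_divergence(p, r)))  # the other (precision-like)

    # 3. Close the curve at its extreme points and take the area under it.
    xs, ys = [0.0] + xs + [1.0], [1.0] + ys + [0.0]
    order = np.argsort(xs)
    x, y = np.asarray(xs)[order], np.asarray(ys)[order]
    return float(np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]) / 2))  # trapezoid rule
```

Exponentiating the scaled KL terms maps the frontier into the unit square before taking the area; in practice the feature extractor and the number of quantization bins materially affect the resulting score.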
Pre-trained language models, despite their rapid advancements powered by scale, still fall short of robust commonsense capabilities. And yet, scale appears to be the winning recipe; after all, the largest models seem to have acquired the largest amount of commonsense capabilities. Or is it? In this paper, we investigate the possibility of a seemingly impossible match: can smaller language models with dismal commonsense capabilities (i.e., GPT-2), ever win over models that are orders of magnitude larger and better (i.e., GPT-3), if the smaller models are powered with novel commonsense distillation algorithms? The key intellectual question we ask here is whether it is possible, if at all, to design a learning algorithm that does not benefit from scale, yet leads to a competitive level of commonsense acquisition. In this work, we study the generative models of commonsense knowledge, focusing on the task of generating generics, statements of commonsense facts about everyday concepts, e.g., birds can fly. We introduce a novel commonsense distillation framework, I2D2, that loosely follows the Symbolic Knowledge Distillation of West et al. but breaks the dependence on the extreme-scale models as the teacher model by two innovations: (1) the novel adaptation of NeuroLogic Decoding to enhance the generation quality of the weak, off-the-shelf language models, and (2) self-imitation learning to iteratively learn from the model's own enhanced commonsense acquisition capabilities. Empirical results suggest that scale is not the only way, as novel algorithms can be a promising alternative. Moreover, our study leads to a new corpus of generics, Gen-A-Tomic, that is the largest and of the highest quality available to date.
A recurring challenge of crowdsourcing NLP datasets is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine-generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI rather than on the $4\times$ larger MultiNLI improves performance on the seven out-of-domain test sets we consider, including by 11% on HANS and 9% on Adversarial NLI. Moreover, combining MultiNLI with WANLI is more effective than combining it with other NLI augmentation sets. Our results demonstrate the potential of natural language generation techniques for curating NLP datasets of enhanced quality and diversity.
Large language models are increasingly capable of generating fluent-seeming text with relatively little task-specific supervision. But can these models accurately explain classification decisions? We consider the task of generating free-text explanations using a small number of human-written examples (i.e., in a few-shot manner). We find that (1) authoring higher-quality examples for prompting leads to higher-quality generations; and (2) surprisingly, in head-to-head comparisons, crowdworkers often prefer explanations generated by GPT-3 to the human-written explanations contained in existing crowdsourced datasets. However, crowdworker ratings also indicate that while the models produce factual, grammatical, and sufficient explanations, they still have room to improve, for example along axes such as providing novel information and supporting the label. We create a pipeline that combines GPT-3 with a supervised filter that incorporates humans in the loop via binary acceptability judgments. Despite the significant subjectivity inherent in judging acceptability, our approach is able to consistently filter GPT-3-generated explanations that humans deem acceptable.
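The pipeline amounts to an overgenerate-and-filter loop. The sketch below is a rough illustration under stated assumptions, not the authors' system: `lm_generate` is a hypothetical stand-in for few-shot GPT-3 sampling, and the TF-IDF classifier is a simplification of the supervised acceptability filter trained on crowdworkers' binary judgments.

```python
# Illustrative generate-then-filter sketch; placeholder components throughout.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def few_shot_prompt(examples, instance):
    """Prepend a handful of human-written (input, explanation) demonstrations."""
    demos = "\n\n".join(f"Input: {x}\nExplanation: {e}" for x, e in examples)
    return f"{demos}\n\nInput: {instance}\nExplanation:"

def train_acceptability_filter(explanations, human_judgments):
    """Binary filter trained on crowdworker acceptability labels (1 = acceptable)."""
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
    clf.fit(explanations, human_judgments)
    return clf

def overgenerate_and_filter(lm_generate, clf, examples, instance, n=8, threshold=0.5):
    """Sample several candidate explanations and keep those the filter accepts."""
    prompt = few_shot_prompt(examples, instance)
    candidates = [lm_generate(prompt) for _ in range(n)]
    keep = clf.predict_proba(candidates)[:, 1] >= threshold
    return [c for c, k in zip(candidates, keep) if k]
```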
The perceived toxicity of language can vary based on someone's identity and beliefs, but this variation is often overlooked when collecting toxic language datasets, resulting in dataset and model biases. We seek to understand the who, why, and what behind biases in toxicity annotations. In two online studies with demographically and politically diverse participants, we investigate the effects of annotator identities (who) and beliefs (why), drawing on social psychology research on hate speech, free speech, racist beliefs, political leaning, and more. We disentangle what is annotated as toxic by considering posts with three characteristics: anti-Black language, African American English (AAE) dialect, and vulgarity. Our results show strong associations between annotator identities and beliefs and their toxicity ratings. Notably, more conservative annotators and those who scored higher on our scale of racist beliefs were less likely to rate anti-Black language as toxic, but more likely to rate AAE as toxic. We also present a case study illustrating how the ratings of a popular toxicity detection system naturally reflect particular beliefs and perspectives. Our findings call for contextualizing toxicity labels in social variables, which has immense implications for toxic language annotation and detection.
Estimating the difficulty of a dataset typically involves comparing state-of-the-art models to humans; the bigger the performance gap, the harder the dataset is said to be. However, this comparison offers little understanding of how difficult each instance in a given distribution is, or what attributes make the dataset difficult for a given model. To address these questions, we frame dataset difficulty -- with respect to a model $\mathcal{V}$ -- as the lack of $\mathcal{V}$-usable information (Xu et al., 2019), where lower values indicate a more difficult dataset for $\mathcal{V}$. We further introduce pointwise $\mathcal{V}$-information (PVI) to measure the difficulty of individual instances with respect to a given distribution. While standard evaluation metrics typically only compare different models on the same dataset, $\mathcal{V}$-usable information and PVI also allow the converse: for a given model $\mathcal{V}$, we can compare different datasets, as well as different instances and slices of the same dataset. Furthermore, our framework allows interpreting the contribution of different input attributes via transformations of the input, which we use to discover annotation artifacts in widely-used NLP benchmarks.
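For concreteness, here is a small sketch of the pointwise quantity under the paper's framing, PVI$(x \to y) = -\log_2 g'[\varnothing](y) + \log_2 g'[x](y)$, once two models from the family $\mathcal{V}$ are available: one finetuned on (input, label) pairs and one finetuned with the input replaced by an empty string. The two probability functions are assumed to be supplied by the caller (e.g., softmax outputs of the two finetuned models); averaging PVI over the dataset estimates $\mathcal{V}$-usable information.

```python
# Sketch of PVI and V-usable information given caller-supplied label probabilities.
import math
from statistics import mean

def pointwise_v_information(p_full, p_null, dataset):
    """Per-instance PVI values.

    p_full(x, y) -> probability the input-conditioned model assigns to gold label y.
    p_null(y)    -> probability the null-input model assigns to gold label y.
    dataset      -> iterable of (x, y) pairs.
    """
    return [math.log2(p_full(x, y)) - math.log2(p_null(y)) for x, y in dataset]

def v_usable_information(pvis):
    """Average PVI over the dataset: higher means the inputs are more usable by V."""
    return mean(pvis)
```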
As major progress is made in open-ended text generation, measuring how close machine-generated text is to human language remains a critical question. We introduce MAUVE, a comparison measure for open-ended text generation, which directly compares the learnt distribution of a text generation model to the distribution of human-written text using divergence frontiers. MAUVE scales up to modern text generation models by computing information divergences in a quantized embedding space. Through an extensive empirical study on three open-ended generation tasks, we find that MAUVE identifies known properties of generated text, scales naturally with model size, and correlates with human judgments, with fewer restrictions than existing distributional evaluation metrics.
Language models pretrained on text from a wide variety of sources form the foundation of today's NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedical and computer science publications, news, and reviews) and eight classification tasks, showing that a second phase of pretraining in-domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the task's unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented using simple data selection strategies is an effective alternative, especially when resources for domain-adaptive pretraining might be unavailable. Overall, we consistently find that multiphase adaptive pretraining offers large gains in task performance.
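As a rough illustration of what a second phase of pretraining looks like in practice, here is a hedged sketch using the Hugging Face `transformers` and `datasets` libraries. The checkpoint name, corpus path, and hyperparameters are placeholders rather than the paper's setup, which continues pretraining RoBERTa on large domain corpora.

```python
# Continued masked-LM pretraining on unlabeled in-domain text; illustrative settings.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "roberta-base"                      # the broad-coverage starting point
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Unlabeled in-domain (DAPT) or task-corpus (TAPT) text, one document per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-lm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
model.save_pretrained("adapted-lm")  # then finetune this checkpoint on the labeled task
```

Task-adaptive pretraining follows the same recipe with the task's own unlabeled text as the corpus.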
Large-scale datasets for natural language inference are created by presenting crowd workers with a sentence (premise), and asking them to generate three new sentences (hypotheses) that it entails, contradicts, or is logically neutral with respect to. We show that, in a significant portion of such data, this protocol leaves clues that make it possible to identify the label by looking only at the hypothesis, without observing the premise. Specifically, we show that a simple text categorization model can correctly classify the hypothesis alone in about 67% of SNLI (Bowman et al., 2015) and 53% of MultiNLI (Williams et al., 2018). Our analysis reveals that specific linguistic phenomena such as negation and vagueness are highly correlated with certain inference classes. Our findings suggest that the success of natural language inference models to date has been overestimated, and that the task remains a hard open problem.
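The hypothesis-only finding is straightforward to reproduce in spirit with a simple classifier. The sketch below uses TF-IDF features and logistic regression rather than the classifier used in the paper, so the exact accuracy will differ, but it should land well above the 33% chance level.

```python
# Hypothesis-only baseline on SNLI: the premise is never shown to the model.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

snli = load_dataset("snli")
train = snli["train"].filter(lambda ex: ex["label"] != -1)   # drop unlabeled examples
test = snli["test"].filter(lambda ex: ex["label"] != -1)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(train["hypothesis"], train["label"])                 # premises never used

preds = clf.predict(test["hypothesis"])
print("hypothesis-only accuracy:", accuracy_score(test["label"], preds))
```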